Small sample size generalization
Author
Abstract
The generalization of linear classifiers is considered for training sample sizes smaller than the feature size. It is shown that there exists a good linear classifier that is better than the Nearest Mean classifier for sample sizes at which Fisher's linear discriminant cannot be used. The use and performance of this small-sample-size classifier are illustrated by some examples.
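When the training sample size is smaller than the feature size, the pooled sample covariance matrix is singular, so Fisher's discriminant, which needs its inverse, is undefined. One standard construction in this regime is a pseudo-Fisher discriminant, which replaces the inverse by the Moore-Penrose pseudo-inverse. The sketch below illustrates that idea and compares it with the Nearest Mean rule; it is an assumed illustration, not necessarily the classifier proposed in the paper, and all dimensions and data are made up for the example.

```python
# Minimal sketch: Nearest Mean vs. a pseudo-Fisher linear discriminant
# in the small-sample regime (fewer training samples than features).
import numpy as np

rng = np.random.default_rng(0)

d, n_per_class = 50, 10                  # feature size > sample size
mean_a, mean_b = np.zeros(d), np.full(d, 0.3)
Xa = rng.normal(mean_a, 1.0, size=(n_per_class, d))
Xb = rng.normal(mean_b, 1.0, size=(n_per_class, d))

ma, mb = Xa.mean(axis=0), Xb.mean(axis=0)

# Nearest Mean classifier: the weight vector is the difference of means.
w_nm = ma - mb

# Pseudo-Fisher: the pooled scatter matrix is singular here, so use its
# Moore-Penrose pseudo-inverse instead of the ordinary inverse.
S = np.cov(np.vstack([Xa - ma, Xb - mb]).T, bias=True)
w_pf = np.linalg.pinv(S) @ (ma - mb)

def error(w, n_test=2000):
    """Test error of the linear rule sign(w @ (x - midpoint))."""
    mid = (ma + mb) / 2
    Ta = rng.normal(mean_a, 1.0, size=(n_test, d))
    Tb = rng.normal(mean_b, 1.0, size=(n_test, d))
    ea = np.mean((Ta - mid) @ w <= 0)    # class A misclassified
    eb = np.mean((Tb - mid) @ w > 0)     # class B misclassified
    return (ea + eb) / 2

print("Nearest Mean error: ", error(w_nm))
print("Pseudo-Fisher error:", error(w_pf))
```

The Nearest Mean rule uses only the class means, while the pseudo-Fisher rule also uses the rank-deficient scatter estimate; which one is better depends on the sample size and the true covariance structure.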
Similar papers
Trainable fusion rules. II. Small sample-size effects
A profound theoretical analysis of the small-sample properties of trainable fusion rules is performed to determine in which situations neural network ensembles can improve or degrade classification results. We consider small-sample effects, specific to multiple-classifier system design, in the two-category case for two important fusion rules: (1) the linear weighted average (weighted voting), realize...
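As a concrete illustration of the first rule mentioned above, here is a minimal sketch of a linear weighted average (weighted voting) fusion of base-classifier outputs in the two-category case. The weights are illustrative placeholders, not the trained weights whose small-sample behaviour the paper analyses.

```python
import numpy as np

def weighted_vote(scores, weights):
    """Fuse per-classifier scores in [-1, 1]; the sign of the weighted
    average gives the fused two-category decision."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    fused = weights @ scores / weights.sum()
    return 1 if fused > 0 else -1

# Three base classifiers score one sample; the second is trusted most.
print(weighted_vote([0.2, -0.9, 0.4], weights=[1.0, 2.5, 1.0]))  # -> -1
```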
Learning curves from a modified VC-formalism: a case study
In this paper we present a case study of a 1-dimensional higher-order neuron, using a statistical approach to learning theory that incorporates some information about the distribution on the sample space and can be viewed as a modification of the Vapnik-Chervonenkis formalism (VC-formalism). We concentrate on learning curves defined as averages of the worst generalization error of binary hypothesi...
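The modified VC-formalism itself is not reproduced here, but the notion of a learning curve as an error rate averaged over random training samples of growing size is easy to illustrate. The sketch below estimates such a curve by plain Monte Carlo for a 1-dimensional threshold classifier; the target concept, sample sizes, and averaging scheme are assumptions made for the example, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_threshold(x, y):
    """Pick the threshold with the fewest training errors (y in {0, 1})."""
    candidates = np.concatenate(([-np.inf], np.sort(x)))
    errors = [np.mean((x > t).astype(int) != y) for t in candidates]
    return candidates[int(np.argmin(errors))]

def gen_error(t, n_test=5000):
    """Generalization error of threshold t against the true concept x > 0."""
    x = rng.uniform(-1, 1, n_test)
    y = (x > 0).astype(int)
    return np.mean((x > t).astype(int) != y)

for m in (2, 4, 8, 16, 32, 64):
    errs = []
    for _ in range(200):  # average over 200 random training samples
        x = rng.uniform(-1, 1, m)
        errs.append(gen_error(train_threshold(x, (x > 0).astype(int))))
    print(f"m={m:3d}  mean generalization error={np.mean(errs):.3f}")
```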
The analysis of very small samples of repeated measurements I: an adjusted sandwich estimator.
The statistical analysis of repeated measures or longitudinal data always requires the accommodation of the covariance structure of the repeated measurements at some stage in the analysis. The general linear mixed model is often used for such analyses, and allows for the specification of both a mean model and a covariance structure. Often the covariance structure itself is not of direct interes...
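For readers unfamiliar with the setup, the sketch below fits a general linear mixed model, with a mean model (y ~ time) and a subject-level random intercept, to synthetic repeated measures using statsmodels. It only illustrates the modelling framework the abstract refers to; the paper's adjusted sandwich estimator for very small samples is not implemented here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Synthetic repeated measures: 8 subjects, 4 time points each.
subjects = np.repeat(np.arange(8), 4)
time = np.tile(np.arange(4), 8)
subject_effect = rng.normal(0.0, 1.0, 8)[subjects]  # random intercepts
y = 1.0 + 0.5 * time + subject_effect + rng.normal(0.0, 0.5, subjects.size)

data = pd.DataFrame({"y": y, "time": time, "subject": subjects})

# Random-intercept linear mixed model: mean model y ~ time, grouped by subject.
result = smf.mixedlm("y ~ time", data, groups=data["subject"]).fit()
print(result.summary())
```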
An Incremental Learning Algorithm That Optimizes Network Size and Sample Size in One Trial
A constructive learning algorithm is described that builds a feedforward neural network with an optimal number of hidden units to balance convergence and generalization. The method starts with a small training set and a small network, and expands the training set incrementally after training. If the training does not converge, the network grows incrementally to increase its learning capacity....
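The sketch below mimics that loop with scikit-learn: train on the current subset, expand the training set if training converged, and double the hidden-layer size if it did not. The convergence criterion, step sizes, and MLP settings are illustrative assumptions, not the algorithm's actual rules.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def incremental_fit(X, y, start=20, step=20, max_hidden=64):
    n_hidden, n_train = 2, start
    while n_train <= len(X):
        net = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000)
        net.fit(X[:n_train], y[:n_train])
        if net.score(X[:n_train], y[:n_train]) < 0.99:
            if n_hidden >= max_hidden:
                break             # capacity exhausted; stop growing
            n_hidden *= 2         # training did not converge: grow the net
        else:
            n_train += step       # converged: expand the training set
    return net

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # simple nonlinear concept
print(incremental_fit(X, y).score(X, y))
```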
An Examination of Psychometric Characteristics and Factor Structure of Death Anxiety Scale Within a Sample of Iranian Patients With Heart Disease
Background and aims: ...